
    Preterm Infants' Pose Estimation with Spatio-Temporal Features

    Objective: Preterm infants' limb monitoring in neonatal intensive care units (NICUs) is of primary importance for assessing infants' health status and motor/cognitive development. Herein, we propose a new approach to preterm infants' limb-pose estimation that exploits spatio-temporal information to detect and track limb joints from depth videos with high reliability. Methods: Limb-pose estimation is performed using a deep-learning framework consisting of a detection and a regression convolutional neural network (CNN) for rough and precise joint localization, respectively. The CNNs are implemented to encode connectivity in the temporal direction through 3D convolution. Assessment of the proposed framework is performed through a comprehensive study with sixteen depth videos acquired in the actual clinical practice from sixteen preterm infants (the babyPose dataset). Results: When applied to pose estimation, the median root mean square distance between the estimated and the ground-truth pose, computed among all limbs, was 9.06 pixels, outperforming approaches based on spatial features only (11.27 pixels). Conclusion: Results showed that the spatio-temporal features had a significant influence on the pose-estimation performance, especially in challenging cases (e.g., homogeneous image intensity). Significance: This article significantly enhances the state of the art in automatic assessment of preterm infants' health status by introducing the use of spatio-temporal features for limb detection and tracking, and by being the first study to use depth videos acquired in the actual clinical practice for limb-pose estimation. The babyPose dataset has been released as the first annotated dataset for infants' pose estimation.
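
    The two-stage idea (a detection CNN followed by a regression CNN, with temporal connectivity encoded through 3D convolution) can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical version of the detection stage only: the clip length, channel widths and kernel sizes are assumptions for illustration, not the authors' architecture.

```python
# A minimal sketch (not the authors' exact architecture) of a 3D-convolutional
# detection network that maps a short clip of depth frames to per-joint heatmaps.
import torch
import torch.nn as nn

NUM_JOINTS = 12  # shoulders, elbows, wrists, hips, knees, ankles (babyPose)

class SpatioTemporalDetector(nn.Module):
    def __init__(self, num_joints: int = NUM_JOINTS):
        super().__init__()
        # 3D convolutions encode connectivity along the temporal axis.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3), padding=1),
            nn.ReLU(inplace=True),
        )
        # Collapse the temporal dimension and predict one heatmap per joint.
        self.head = nn.Conv2d(32, num_joints, kernel_size=1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 1, frames, height, width) depth clip
        features = self.encoder(clip)          # (batch, 32, frames, H, W)
        features = features.mean(dim=2)        # average over the temporal axis
        return self.head(features)             # (batch, num_joints, H, W)

if __name__ == "__main__":
    clip = torch.randn(2, 1, 5, 96, 128)       # two 5-frame depth clips
    heatmaps = SpatioTemporalDetector()(clip)
    print(heatmaps.shape)                      # torch.Size([2, 12, 96, 128])
```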

    Preterm infants' limb-pose estimation from depth images using convolutional neural networks

    Preterm infants' limb-pose estimation is a crucial but challenging task, which may improve patients' care and facilitate clinicians in monitoring the infant's movements. Work in the literature either provides approaches to whole-body segmentation and tracking, which, however, have poor clinical value, or retrieves limb pose a posteriori from limb segmentation, increasing computational costs and introducing sources of inaccuracy. In this paper, we address the problem of limb-pose estimation from a different point of view. We propose a 2D fully-convolutional neural network for roughly detecting limb joints and joint connections, followed by a regression convolutional neural network for accurate joint and joint-connection position estimation. Joints from the same limb are then connected with a maximum bipartite matching approach. Our analysis does not require any prior modeling of the infant's body structure, nor any manual intervention. For developing and testing the proposed approach, we built a dataset of four videos (video length = 90 s) recorded with a depth sensor in a neonatal intensive care unit (NICU) during the actual clinical practice, achieving a median root mean square distance, in pixels, of 10.790 (right arm), 10.542 (left arm), 8.294 (right leg) and 11.270 (left leg) with respect to the ground-truth limb pose. The idea of estimating limb pose directly from depth images may represent a future paradigm for addressing the problem of preterm infants' movement monitoring and offer all possible support to clinicians in NICUs.
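
    The final association step, connecting joints of the same limb with a maximum bipartite matching, can be sketched with the Hungarian algorithm as implemented in SciPy. The affinity matrix below is a hypothetical placeholder; in the described pipeline such scores would come from the predicted joint-connection maps.

```python
# A minimal sketch of the joint-association step: candidate joints of two
# neighbouring types (e.g. elbows and wrists) are paired with a maximum-weight
# bipartite matching. The affinity values are illustrative placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

# affinity[i, j]: confidence that elbow candidate i connects to wrist candidate j
affinity = np.array([
    [0.90, 0.10, 0.05],
    [0.20, 0.75, 0.15],
])

# linear_sum_assignment minimises cost, so negate to maximise total affinity.
rows, cols = linear_sum_assignment(-affinity)
for elbow_idx, wrist_idx in zip(rows, cols):
    print(f"elbow {elbow_idx} -> wrist {wrist_idx} "
          f"(affinity {affinity[elbow_idx, wrist_idx]:.2f})")
```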

    Supervised CNN strategies for optical image segmentation and classification in interventional medicine

    The analysis of interventional images is a topic of high interest for the medical-image analysis community. Such an analysis may provide interventional-medicine professionals with both decision support and context awareness, with the final goal of improving patient safety. The aim of this chapter is to give an overview of some of the most recent approaches (up to 2018) in the field, with a focus on Convolutional Neural Networks (CNNs) for both segmentation and classification tasks. For each approach, summary tables are presented reporting the dataset used, the involved anatomical region and the achieved performance. Benefits and disadvantages of each approach are highlighted and discussed. Available datasets for algorithm training and testing and commonly used performance metrics are summarized to offer a source of information for researchers approaching the field of interventional-image analysis. The advancements in deep learning for medical-image analysis increasingly involve the interventional-medicine field. However, these advancements are undeniably slower than in other fields (e.g., preoperative-image analysis), and considerable work still needs to be done to provide clinicians with all possible support during interventional-medicine procedures.

    Heartbeat detection by laser Doppler vibrometry and machine learning

    Background: Heartbeat detection is a crucial step in several clinical fields. The Laser Doppler Vibrometer (LDV) is a promising non-contact measurement technique for heartbeat detection. The aim of this work is to assess whether machine learning can be used for detecting heartbeats from the carotid LDV signal. Methods: The performance of Support Vector Machine (SVM), Decision Tree (DT), Random Forest (RF) and K-Nearest Neighbor (KNN) classifiers was compared using leave-one-subject-out cross-validation as the testing protocol on an LDV dataset collected from 28 subjects. The classification was conducted on LDV-signal windows, which were labeled as beat, if containing a beat, or no-beat, otherwise. The labeling procedure was performed using electrocardiography as the gold standard. Results: For the beat class, the F1-score values were 0.93, 0.93, 0.95 and 0.96 for RF, DT, KNN and SVM, respectively. No statistical differences were found between the classifiers. When testing the SVM on the full-length (10 min long) LDV signals, to simulate a real-world application, we achieved a median macro-F1 of 0.76. Conclusions: Using machine learning for heartbeat detection from carotid LDV signals showed encouraging results, representing a promising step in the field of contactless cardiovascular-signal analysis.
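
    The evaluation protocol (leave-one-subject-out cross-validation of a classifier over labeled signal windows) can be sketched with scikit-learn as follows. The features, labels and subject indices are random placeholders standing in for the real per-window LDV features and the ECG-derived beat/no-beat labels.

```python
# A minimal sketch of leave-one-subject-out cross-validation with an SVM,
# as in the testing protocol described above. All data below are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_windows, n_features, n_subjects = 280, 12, 28
X = rng.normal(size=(n_windows, n_features))          # per-window LDV features (placeholder)
y = rng.integers(0, 2, size=n_windows)                # 1 = beat, 0 = no-beat (placeholder)
subjects = np.repeat(np.arange(n_subjects), n_windows // n_subjects)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, groups=subjects,
                         cv=LeaveOneGroupOut(), scoring="f1_macro")
print(f"macro-F1 across left-out subjects: median {np.median(scores):.2f}")
```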

    A cloud-based healthcare infrastructure for neonatal intensive-care units

    Intensive medical attention to preterm babies is crucial to avoid short-term and long-term complications. Within neonatal intensive care units (NICUs), cribs are equipped with electronic devices aimed at monitoring patients, administering drugs, and supporting clinicians in making diagnoses and offering treatments. To manage this huge data flux, a cloud-based healthcare infrastructure is presented that allows data collection from different devices (i.e., patient monitors, bilirubinometers, and transcutaneous bilirubinometers), as well as data storage, processing and transfer. Communication protocols were designed to enable the communication and data transfer between the three different devices and a unique database, and an easy-to-use graphical user interface (GUI) was implemented. The infrastructure is currently used at the “Women’s and Children’s Hospital G. Salesi” in Ancona (Italy), supporting clinicians and health operators in their daily activities.
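
    Purely as an illustration of the device-to-database data flow described above, the sketch below posts one measurement to a cloud endpoint over HTTP. The endpoint URL, payload schema and transport are hypothetical assumptions; the actual communication protocols of the infrastructure are not detailed here.

```python
# An illustrative sketch of a bedside device pushing a measurement to a cloud
# database over HTTP. Endpoint, schema and authentication are hypothetical.
from datetime import datetime, timezone

import requests  # pip install requests

ENDPOINT = "https://example-nicu-cloud.local/api/measurements"  # hypothetical URL

payload = {
    "device_type": "transcutaneous_bilirubinometer",  # hypothetical schema
    "crib_id": "crib-07",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "value": 8.4,
    "unit": "mg/dL",
}

# POST the measurement as JSON; raise if the cloud service rejects it.
response = requests.post(ENDPOINT, json=payload, timeout=5)
response.raise_for_status()
```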

    Learning-based screening of endothelial dysfunction from photoplethysmographic signals

    Endothelial-dysfunction (ED) screening is of primary importance for the early diagnosis of cardiovascular diseases. Recently, approaches to ED screening have focused more and more on photoplethysmography (PPG)-signal analysis, which is performed in a threshold-sensitive way and may not be suitable for tackling the high variability of PPG signals. The goal of this work was to present an innovative machine-learning (ML) approach to ED screening that could tackle such variability. Two research hypotheses guided this work: (H1) ML can support ED screening by classifying PPG features; and (H2) classification performance can be improved by also including anthropometric features. To investigate H1 and H2, a new dataset was built from 59 subjects. The dataset is balanced in terms of subjects with and without ED. Support vector machine (SVM), random forest (RF) and k-nearest neighbors (KNN) classifiers were investigated for feature classification. With the leave-one-out evaluation protocol, the best classification results for H1 were obtained with SVM (accuracy = 71%, recall = 59%). When testing H2, the recall was further improved to 67%. These results are a promising step towards developing a novel and intelligent PPG device to assist clinicians in performing large-scale and low-cost ED screening.
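
    Hypothesis H2 (concatenating anthropometric features with the PPG-derived ones before classification) can be sketched with scikit-learn as follows, using the leave-one-out protocol mentioned above. All feature values are random placeholders; the real feature set is the one described in the paper.

```python
# A minimal sketch of H2: joint PPG + anthropometric feature vectors classified
# with an SVM under leave-one-out cross-validation. Data are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects = 59
ppg_features = rng.normal(size=(n_subjects, 8))        # placeholder PPG features
anthropometric = rng.normal(size=(n_subjects, 3))      # placeholder anthropometric features
y = rng.integers(0, 2, size=n_subjects)                # 1 = ED, 0 = no ED (placeholder)

X = np.hstack([ppg_features, anthropometric])          # H2: concatenated feature vector
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=LeaveOneOut(), scoring="accuracy")
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```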

    The babyPose dataset

    The database described here contains data relevant to preterm infants' movement acquired in neonatal intensive care units (NICUs). The data consist of 16 depth videos recorded during the actual clinical practice. Each video consists of 1000 frames (i.e., 100 s). The dataset was acquired at the NICU of the Salesi Hospital, Ancona (Italy). Each frame was annotated with the limb-joint locations. Twelve joints were annotated, i.e., left and right shoulder, elbow, wrist, hip, knee and ankle. The database is freely accessible at http://doi.org/10.5281/zenodo.3891404. This dataset represents a unique resource for artificial-intelligence researchers who want to develop algorithms to provide healthcare professionals working in NICUs with decision support. Hence, the babyPose dataset is the first annotated dataset of depth images relevant to preterm infants' movement analysis.
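
    A minimal, purely hypothetical loading loop for a dataset structured as described (16 depth videos, 1000 frames each, 12 annotated joints per frame) is sketched below. The file layout and annotation format used here are assumptions for illustration, not the released babyPose format; refer to the Zenodo record for the actual structure.

```python
# An illustrative iterator over depth frames and per-frame joint annotations.
# The directory layout (depth/ and joints/ subfolders of .npy files) is assumed.
from pathlib import Path
import numpy as np

JOINT_NAMES = ["L_shoulder", "R_shoulder", "L_elbow", "R_elbow",
               "L_wrist", "R_wrist", "L_hip", "R_hip",
               "L_knee", "R_knee", "L_ankle", "R_ankle"]

def iter_video(video_dir: Path):
    """Yield (depth_frame, joints) pairs for one video (hypothetical layout)."""
    for depth_path in sorted((video_dir / "depth").glob("*.npy")):
        depth = np.load(depth_path)                               # (H, W) depth map
        joints = np.load(video_dir / "joints" / depth_path.name)  # (12, 2) x, y coordinates
        yield depth, joints

if __name__ == "__main__":
    for depth, joints in iter_video(Path("babyPose/video_01")):
        print(depth.shape, dict(zip(JOINT_NAMES, joints.tolist())))
        break
```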

    MyDi application: Towards automatic activity annotation of young patients with Type 1 diabetes

    Type 1 diabetes mellitus (T1DM) is a widespread metabolic disorder characterized by pancreatic insufficiency. People with T1DM require lifelong insulin injections, constant monitoring of glycemia, and keeping note of their activities. This continuous follow-up, especially at a very young age, may be challenging. Adolescents with T1DM may develop anxiety symptoms and depression, which can lead to the loss of glycemic control. An assistive technology that automatizes the activity-monitoring process could support these young patients in managing T1DM. The aim of this work is to present the MyDi framework, which integrates a smart glycemic diary (for Android users) to automatically record and store the patient's activity via pictures, and a deep-learning (DL)-based technology able to classify the activity performed by the patient (i.e., meal or sport) via picture analysis. The proposed approach was tested on two different datasets, the Insta-Dataset with 3498 pictures (also used for training and validating the DL model) and the MyDi-Dataset with 126 pictures, achieving very encouraging results in both cases (Prec_i = 1.0, Rec_i = 1.0, F1_i = 1.0 with i ∈ {meal, sport}), prompting the possibility of translating this application into the T1DM monitoring process.
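
    The reported per-class metrics (precision, recall and F1 for i ∈ {meal, sport}) can be computed with scikit-learn as sketched below. The ground-truth and predicted labels are placeholders; in the MyDi pipeline the predictions would come from the deep-learning picture classifier.

```python
# A minimal sketch of per-class precision, recall and F1 computation.
# The labels below are placeholders, not results from the MyDi classifier.
from sklearn.metrics import precision_recall_fscore_support

CLASSES = ["meal", "sport"]
y_true = ["meal", "sport", "meal", "sport", "meal"]
y_pred = ["meal", "sport", "meal", "sport", "meal"]   # placeholder predictions

prec, rec, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=CLASSES, zero_division=0)
for cls, p, r, f in zip(CLASSES, prec, rec, f1):
    print(f"{cls}: precision={p:.2f} recall={r:.2f} F1={f:.2f}")
```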

    Evaluating the autonomy of children with autism spectrum disorder in washing hands: A deep-learning approach

    Monitoring children with Autism Spectrum Disorder (ASD) during the execution of the Applied Behaviour Analysis (ABA) program is crucial to assess their progress while performing actions. Despite its importance, this monitoring procedure still relies on ABA operators' visual observation and manual annotation of the significant events. In this work, a deep-learning (DL)-based approach is proposed to evaluate the autonomy of children with ASD while performing the hand-washing task. The goal of the algorithm is the automatic detection of RGB frames in which the ASD child washes his/her hands autonomously (no-aid frames) or is supported by the operator (aid frames). The proposed approach relies on a pre-trained VGG16 convolutional neural network (CNN) modified to fulfill the binary classification task. The performance of the fine-tuned VGG16 was compared against that of other CNN architectures. The fine-tuned VGG16 achieved the best performance, with a recall of 0.92 and 0.89 for the no-aid and aid class, respectively. These results prompt the possibility of translating the presented methodology into the actual monitoring practice. The integration of the presented tool with other computer-aided monitoring systems into a single framework will provide full support to ABA operators during therapy sessions.
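
    A minimal sketch of the kind of fine-tuning described above (a pre-trained VGG16 with its head replaced for a two-class aid/no-aid output) is given below using torchvision. Which layers are frozen and how the classifier is resized are assumptions, not the paper's exact training recipe.

```python
# A minimal sketch of adapting a pre-trained VGG16 to binary classification.
# Freezing strategy and head size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional backbone and retrain only the classifier head.
for param in model.features.parameters():
    param.requires_grad = False

# Replace the last fully connected layer with a 2-way output (aid / no-aid).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

if __name__ == "__main__":
    frames = torch.randn(4, 3, 224, 224)       # a batch of RGB frames
    logits = model(frames)
    print(logits.shape)                        # torch.Size([4, 2])
```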

    Quantitative Analysis of the Cervical Texture by Ultrasound and Correlation with Gestational Age

    Objectives: Quantitative texture analysis has been proposed to extract robust features from ultrasound images in order to detect subtle changes in their texture. The aim of this study was to evaluate the feasibility of quantitative cervical texture analysis to assess cervical tissue changes throughout pregnancy. Methods: This was a cross-sectional study including singleton pregnancies between 20.0 and 41.6 weeks of gestation from women who delivered at term. Cervical length was measured, and a selected region of interest in the cervix was delineated. A model to predict gestational age based on features extracted from cervical images was developed following three steps: data splitting, feature transformation, and regression model computation. Results: Seven hundred images, 30 per gestational week, were included for analysis. There was a strong correlation between the gestational age at which the images were obtained and the gestational age estimated by quantitative analysis of the cervical texture (R = 0.88). Discussion: This study provides evidence that quantitative analysis of cervical texture can extract features from cervical ultrasound images that correlate with gestational age. Further research is needed to evaluate its applicability as a biomarker of the risk of spontaneous preterm birth, as well as its role in cervical assessment in other clinical situations in which cervical evaluation might be relevant.
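
    The three-step modeling pipeline (data splitting, feature transformation, regression) and the reported correlation R between estimated and true gestational age can be sketched as follows. The texture features are random placeholders standing in for those extracted from the cervical region of interest, and ridge regression is an illustrative choice rather than the study's model.

```python
# A minimal sketch of split -> transform -> regress, with Pearson R on held-out
# images. Features and gestational ages below are random placeholders.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_images, n_features = 700, 20
X = rng.normal(size=(n_images, n_features))            # placeholder texture features
ga_weeks = rng.uniform(20.0, 42.0, size=n_images)      # gestational age (weeks)

X_train, X_test, y_train, y_test = train_test_split(
    X, ga_weeks, test_size=0.3, random_state=0)        # step 1: data splitting
model = make_pipeline(StandardScaler(), Ridge())       # steps 2-3: transform + regression
model.fit(X_train, y_train)

r, _ = pearsonr(y_test, model.predict(X_test))         # reported as R in the study
print(f"Pearson R on held-out images: {r:.2f}")
```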